The findings make clear that the race to use AI to find network vulnerabilities has "already begun"
Cybercriminals were recently caught using a zero-day exploit believed to have been discovered and developed by artificial intelligence, Google announced Monday.
The announcement comes as major AI companies, including Anthropic and OpenAI, have begun testing newer models that can find and exploit critical software vulnerabilities better than most humans.
Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they target vulnerabilities that defenders do not yet know about, meaning no fix exists when the attacks begin.
[...] Google concluded that Anthropic's Claude Mythos model — which has already found thousands of vulnerabilities across every major operating system and web browser — was most likely not used to develop the zero-day exploit.
Also at TechRepublic and API.
Previously: Mozilla Says 271 Vulnerabilities Found by Mythos Have "Almost No False Positives"
A stainless steel breakthrough from the University of Hong Kong (HKU) could help solve one of the biggest problems facing green hydrogen: how to build electrolyzers that are tough enough for seawater, yet cheap enough for large-scale clean energy.
Led by Professor Mingxin Huang in HKU's Department of Mechanical Engineering, the team developed a special stainless steel for hydrogen production (SS-H2). The material resists corrosion under conditions that normally push stainless steel past its limits, making it a promising candidate for producing hydrogen from seawater and other harsh electrolyzer environments.
The discovery, reported in Materials Today in the study "A sequential dual-passivation strategy for designing stainless steel used above water oxidation," builds on Huang's long-running "Super Steel" Project. The same research program previously produced anti-COVID-19 stainless steel in 2021, along with ultra-strong and ultra-tough Super Steel in 2017 and 2020.
Green hydrogen is made by using electricity, ideally from renewable sources, to split water into hydrogen and oxygen. Seawater is an especially tempting feedstock because it is abundant, but it brings a serious materials problem: salt, chloride ions, side reactions, and corrosion can quickly damage electrolyzer components.
Recent reviews of direct seawater electrolysis continue to highlight the same core challenge. The technology could provide a more sustainable route to hydrogen, but corrosion, chlorine-related side reactions, catalyst degradation, precipitates, and limited long-term durability remain major obstacles to commercial use.
That is where SS-H2 could matter. In a salt water electrolyzer, the HKU team found that the new steel performs comparably to the titanium-based structural materials used in current industrial practice for hydrogen production from desalted seawater or acid. The difference is cost. Titanium parts coated with precious metals such as gold or platinum are expensive, while stainless steel is far more economical.
For a 10 megawatt PEM electrolysis tank system, the total cost at the time of the HKU report was estimated at about HK$17.8 million, with structural components making up as much as 53% of that expense. According to the team's estimate, replacing those costly structural materials with SS-H2 could cut the cost of the structural materials by a factor of about 40.
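A quick back-of-the-envelope check puts those figures in perspective. This sketch assumes the 53% share and the roughly 40-fold reduction both refer to the same structural-material line item:

```python
# Rough cost arithmetic from the figures quoted above.
# Assumption: the 53% structural share and the ~40x reduction
# apply to the same line item of the HK$17.8M system cost.
total_cost_hkd = 17.8e6      # 10 MW PEM electrolyzer system
structural_share = 0.53      # structural components' share of cost
reduction_factor = 40        # claimed SS-H2 cost advantage

titanium_cost = total_cost_hkd * structural_share
ss_h2_cost = titanium_cost / reduction_factor

print(f"Structural cost, titanium-based: HK${titanium_cost:,.0f}")
print(f"Structural cost, SS-H2:          HK${ss_h2_cost:,.0f}")
print(f"Implied saving: ~{(titanium_cost - ss_h2_cost) / total_cost_hkd:.0%} "
      f"of total system cost")
```

Under those assumptions, the substitution would eliminate roughly half of the total system cost.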
Stainless steel has been used for more than a century in corrosive environments because it protects itself. The key ingredient is chromium. When chromium (Cr) oxidizes, it creates a thin passive film that shields the steel from damage.
But that familiar protection system has a built-in ceiling. In conventional stainless steel, the chromium-based protective layer can break down at high electrical potentials: stable Cr2O3 can be further oxidized into soluble Cr(VI) species, causing transpassive corrosion at around 1000 mV (versus a saturated calomel electrode, SCE). That is well below the ~1600 mV needed for water oxidation.
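The article does not spell out the chemistry, but a commonly cited form of this transpassive oxidation, consistent with its description of Cr2O3 dissolving as Cr(VI), is chromate formation:

```latex
\mathrm{Cr_2O_3 + 5\,H_2O \;\longrightarrow\; 2\,CrO_4^{2-} + 10\,H^+ + 6\,e^-}
```

Once the protective oxide converts to a soluble species like this, the barrier film dissolves and the steel beneath is exposed.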
Even 254SMO super stainless steel, a benchmark chromium-based alloy known for strong pitting resistance in seawater, runs into this high-voltage limit. It may perform well in ordinary marine settings, but the extreme electrochemical environment of hydrogen production is a different challenge.
The HKU team's answer was a strategy called "sequential dual-passivation." Instead of relying only on the usual chromium oxide barrier, SS-H2 forms a second protective layer.
The first layer is the familiar Cr2O3-based passive film. Then, at around 720 mV, a manganese-based layer forms on top of the chromium-based one. This second shield protects the steel in chloride-containing environments up to an ultra-high potential of 1700 mV.
That is what makes the finding so striking. Manganese is usually not viewed as a friend of stainless steel corrosion resistance. In fact, the prevailing view has been that manganese weakens it.
"Initially, we did not believe it because the prevailing view is that Mn impairs the corrosion resistance of stainless steel. Mn-based passivation is a counter-intuitive discovery, which cannot be explained by current knowledge in corrosion science. However, when numerous atomic-level results were presented, we were convinced. Beyond being surprised, we cannot wait to exploit the mechanism," said Dr. Kaiping Yu, the first author of the article, whose PhD is supervised by Professor Huang.
The path from the first observation to publication was not quick. The team spent nearly six years moving from the initial discovery of the unusual stainless steel to the deeper scientific explanation, then toward publication and potential industrial use.
"Different from the current corrosion community, which mainly focuses on the resistance at natural potentials, we specializes in developing high-potential-resistant alloys. Our strategy overcame the fundamental limitation of conventional stainless steel and established a paradigm for alloy development applicable at high potentials. This breakthrough is exciting and brings new applications," Professor Huang said.
The work has also moved beyond the laboratory. The research achievements have been submitted for patents in multiple countries, and two patents had already been granted at the time of the HKU announcement. The team also reported that tons of SS-H2-based wire had been produced with a factory in Mainland China.
"From experimental materials to real products, such as meshes and foams, for water electrolyzers, there are still challenging tasks at hand. Currently, we have made a big step toward industrialization. Tons of SS-H 2 -based wire has been produced in collaboration with a factory from the Mainland. We are moving forward in applying the more economical SS-H 2 in hydrogen production from renewable sources," added Professor Huang.
Although the SS-H2 study was published in 2023, its core problem has only become more relevant. Newer seawater electrolysis research continues to focus on the same bottlenecks: corrosion-resistant materials, long-lasting electrodes, chlorine suppression, and system designs that can survive real seawater rather than ideal laboratory solutions. A 2025 Nature Reviews Materials review described direct seawater electrolysis as promising but still held back by corrosion, side reactions, metal precipitates, and limited lifetime.
Other recent work has explored stainless-steel-based electrodes with protective catalytic layers, including NiFe-based coatings and Pt atomic clusters, to improve durability in natural seawater. Researchers have also reported corrosion-resistant anode strategies built on stainless steel substrates, showing that stainless steel remains a major focus in the effort to make seawater electrolysis more practical.
This newer research does not replace the SS-H2 discovery. Instead, it reinforces why the HKU team's approach is important. The field is still searching for materials that can survive the punishing mix of saltwater chemistry, high voltage, and industrial operating demands. SS-H2 stands out because it attacks the problem not only with a coating or catalyst, but with a new alloy design strategy that changes how stainless steel protects itself.
SS-H2 is not yet a plug-and-play solution for the hydrogen economy. The team has acknowledged that turning experimental materials into real electrolyzer products, including meshes and foams, still involves difficult engineering work.
Even so, the promise is clear. A stainless steel that can withstand high-voltage seawater conditions while replacing expensive titanium-based components could make hydrogen production cheaper, more scalable, and easier to pair with renewable energy.
For a field where cost and durability often decide whether a technology can leave the lab, a steel that builds its own second shield may be more than a materials science surprise. It could become a practical step toward cleaner hydrogen at industrial scale.
Journal Reference: DOI: 10.1016/j.mattod.2023.07.022
https://www.lttlabs.com/articles/2026/05/12/ups-exploration
Our company has always had many UPSs around for the convenience, and the business case, of not suddenly losing a ton of work. We've been intrigued to check them out further, but we've been wary of connecting any of them to measurement equipment given the high voltages involved: there is a real risk of damaging the equipment, or ourselves.
Despite all that, we're throwing caution to the wind to check out some UPSs from around the office. There are so many directions that UPS/surge testing could go, so this article will cover the test setup and the more interesting exploration results.
For years workers were taught to endure stress in silence. Now, rising burnout is forcing employers and governments to confront the cost of modern work:
Hayley Hughes said yes to everything. She worked in health care at a Queensland medical centre, managing nine GPs and up to 18 staff, while overseeing a change of ownership.
[...] Over many months of an intense workload, Hayley started to feel physically ill from the stress. She experienced brain fog, a racing heart and insomnia.
[...] The path to burnout recovery can include mental health leave, seeing a doctor, maybe receiving a diagnosis of anxiety or depression, medicating yourself, and returning to work ready to roll again.
Or — like Jeffrey and Hayley — you could change roles, reduce hours or move into less senior or less stressful positions.
[...] While taking control of burnout can help recovery, more people are asking if the onus should be on employers.
With almost half of Australian workers feeling burnt out, experts are asking how workplace culture and systems contribute to, or even cause, exhaustion, and whether systemic change might lead to a reduction in burnout overall.
The question of who is responsible for burnout matters. Whether we define burnout as an individual failing or a systemic one determines how we treat it and, in turn, where the responsibility, and the cost, lands.
Burnout has entered the cultural lexicon with a thoroughness that has outpaced its clinical definition.
It is discussed in podcast episodes and performance reviews, in resignation letters and therapy sessions, on TikTok and in medical journals. Yet despite its ubiquity, or perhaps because of it, there remains no consensus on what burnout actually is and, critically, whose responsibility it is to prevent and treat it.
[...] "From my experience, unless the condition is part of the psychiatric manual, it doesn't exist. Insurers won't recognise burnout," he says. "What happens instead is people take their accrued leave, or [seek a diagnosis of] depression in order to get sick leave."
This pathway comes at a cost. Depression is classified as a disorder of the individual, a medical condition located in the person's brain, body, and history.
When a burned-out worker is diagnosed as depressed, the implied cause shifts from the workplace to the worker. The worker uses their own leave, sees a doctor on their own dime, takes medication, pays for therapy and formulates individual coping strategies.
When they recover, they often return to a workplace unchanged from the one where the injury occurred in the first place.
[...] Longitudinal studies suggest certain personality traits can increase the risk of burnout.
But the data is also clear that, over the long term, personality makes a relatively small impact and workplace culture and expectations are far more significant in determining who burns out.
By framing burnout as an individual worker problem, organisations do not have to examine deeper systemic issues like toxic work cultures, unrealistic expectations, or inadequate support structures.
The employee — not the employer — is paying the price.
[...] "In any service work if you are deeply connected to the cause, you are more at risk of burnout," she says.
Jill scoffs at resilience training, mindfulness, wellness programs and apps as satisfactory measures to fix burnout.
"The whole idea of someone being resilient is ridiculous," she says. "To whose standard?"
She sees restorative justice as a model for treating burnout. The worker and employer talk about the conditions that lead to burnout and explore new ways of working that may alter the workplace and make it less harmful for others.
The clearest example in Australia of what happens when governments and institutions accept that burnout is their problem to solve is in education.
Teacher burnout in Australia is not new. But it has reached a point where its consequences are too visible and too costly to keep attributing to individual teacher inadequacy.
[...] The National Teacher Workforce Action Plan is a federal government attempt to address burnout on a systemic level. It seeks to do this by reducing workloads, improving retention and increasing teacher support. According to the plan, the key strategies focus on relieving administrative burdens, expanding mentorship and providing financial incentives.
[...] Dr Ben Arnold, an associate professor in educational leadership at Deakin University, says teachers have higher levels of meaning in their work than many others, but it comes at a cost.
"They have higher workload, higher pace, higher cognitive demands, and very high emotional demands. And then there are all these other non-teaching things as well," he says.
These include communication with parents that goes far beyond the usual check-in at parent-teacher night, a heavier administrative load, and external testing.
"Teachers often describe earlier decades in Australian education as a period when they experienced greater professional autonomy and public trust," says Arnold, whose research focuses on how education policies and working conditions in schools impact the health, sustainability and diversity of teachers.
Increasing emphasis on performance measurement, accountability, external testing and compliance has introduced additional pressures and administrative demands, he says, and teacher goodwill holds the system together.
[...] "We see there's a link between teacher burnout and student achievement," says Collie. "It is a system thing."
[...] "Mindfulness, taking time off: these can keep burnout at bay. But if you are working in a toxic workplace, you need to address that," he says. "Leaving one toxic workplace for another will not help."
[...] The cleanest individual solutions to burnout — leave the job, take months off, downshift — are available only to those with financial security.
For everyone else, the question of systemic change is not a luxury. It's the only real option.
The developer in question, Pawel Jarczak, voluntarily shuttered his “OrcaSlicer-BambuLab” project, which would have restored direct control between Bambu Lab 3D printers and OrcaSlicer. Last year, Bambu Lab deemed these types of third-party integrations a risk to its infrastructure, saying its cloud servers were being inundated with requests. OrcaSlicer was singled out as the main source of the rogue traffic.
Rossmann’s video contained a link to the Consumer Rights Wiki to explain the issue at hand to his audience, who may not be familiar with 3D printing but are avid defenders of Right to Repair. Right to Repair is a global consumer rights movement built on the principle that if you bought it, you own it. And if you own a thing, like a Bambu Lab 3D printer, you should have the freedom to fix, modify, or maintain the product as you see fit. Manufacturers shouldn’t be allowed to gatekeep the ability to fix a product, and they should provide manuals, schematics, and diagnostic software to allow end users to fix their own machines.
Bambu Lab printers are difficult to mod and/or repair yourself, with parts that are often glued in place. The original Bambu Lab X1 Carbon was notorious for its non-replaceable carbon rods that could wear out, and a hotend nozzle that needed a screwdriver and a tube of thermal paste to swap out if you wanted to avoid buying a $35 hotend just to change the nozzle size. These difficult parts were notably replaced with more user-friendly parts with the introduction of the H2D and subsequently, the X2D.
Rossmann has not started a crowdfunding site yet, stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video has over 54,000 views so far, with commenters vowing to back the case as requested.
There are already hundreds of thousands of large language models (LLMs) in existence, with a few dozen commercial systems dominating the market. Between options such as GPT-4, Claude and Gemini, many people have their favorite, especially when it comes to creative tasks such as writing.
Those preferences, however, are likely entirely in the eye of the beholder. According to new research from Duke University, the creative outputs of commercial LLMs are more similar to each other than users might hope. When challenged with three standard tasks assessing creativity, answers from commercial LLMs are much more alike than those from their human counterparts.
"People might wonder if different LLMs will take them in different directions with the same prompts for creative projects," said Emily Wenger, the Cue Family Assistant Professor of Electrical and Computer Engineering at Duke. "This paper basically says no. LLMs are less creative as a population than humans."
[...] One seminal paper in this emerging field conducted by Anil Doshi and Oliver Hauser found that writers who used GPT-4 produced more creative stories than humans working alone. However, the same study showed that those LLM-aided stories were more similar to each other than were stories from human writers working solo.
[...] "Commercial LLMs have all been trained on the same dataset—the entirety of the internet—and they all have the same goal," Wenger said. "It seemed likely to me that this would limit the amount of diversity we'd see in their creativity, so I decided to find out."
[...] "Significant empirical research on the past few decades highlight how much human creativity depends on variability," said Yoed Kenett. "The problem, as we and others are increasingly showing, is that while LLMs appear to generate extremely original outputs, they are overly homogenized and not variable in their responses. This could have detrimental long-term impact on human creative thinking and thus must be addressed."
The results, which aimed to measure the variability and originality in responses between LLMs and people, were clear. While individual LLMs might outperform individual people in levels of creativity, as a whole, the algorithms' responses were much more similar to each other than the people's. Importantly, altering the LLM system prompt to encourage higher creativity only slightly increased their variability—and human responses still won out.
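The study's actual creativity metrics aren't reproduced here, but the general idea of measuring homogeneity across a pool of responses can be sketched with a simple stand-in: mean pairwise cosine similarity over TF-IDF vectors (an illustrative assumption; the paper's own measures are more sophisticated):

```python
# Toy homogeneity measure: average pairwise cosine similarity over a
# pool of responses. Higher = more alike. This is a generic stand-in,
# not the metric used in the PNAS Nexus paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(responses: list[str]) -> float:
    vectors = TfidfVectorizer().fit_transform(responses)
    sims = cosine_similarity(vectors)
    n = len(responses)
    # Average the off-diagonal entries (exclude each response vs itself).
    return (sims.sum() - n) / (n * (n - 1))

# Hypothetical "alternative uses for a brick" answers:
llm_pool = ["use a brick to prop open a door",
            "a brick can hold a door open",
            "prop a door open with the brick"]
human_pool = ["grind the brick into red pigment",
              "use it as a bass drum mute",
              "a makeshift whetstone for knives"]

print(mean_pairwise_similarity(llm_pool))    # higher: homogeneous pool
print(mean_pairwise_similarity(human_pool))  # lower: diverse pool
```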
"This work has broad implications as people continue adopting and integrating LLMs into their daily life," Wenger said. "Over reliance on these tools will smooth the world's work toward the same underlying set of words or grammar, tending to make writing all look the same."
"If you're trying to come up with an original concept or product to stand out from the crowd," Wenger continued, "this work highly suggests you should bring together a diverse group of people to brainstorm rather than relying on AI."
Journal Reference: "Large language models are homogeneously creative." Emily Wenger and Yoed N. Kenett. PNAS Nexus, 2026, 5, pgag042. DOI: 10.1093/pnasnexus/pgag042
Publishing in Communications Chemistry, researchers from Kyushu University have discovered a simple method of generating hydrogen gas by mixing methanol, sodium hydroxide, and iron ions, then irradiating the solution with UV light. Furthermore, the catalytic activity of the reaction is comparable to that of some previously reported systems that use organometallic and heterogeneous catalysts. The team also demonstrated that the method could generate hydrogen gas from other alcohols and biomass-derived materials, such as glucose and cellulose.
From microchip circuits to the medicine you take when you fall ill, everything in our lives requires catalysts. Naturally, research and development of catalysts are not only lucrative but essential to maintaining our modern lifestyle. Catalysts are usually composed of a matrix of metals and compounds organized in sophisticated structures. As a result, while catalysts can be very efficient, they are also potentially expensive and complicated to make.
"Our research group has long been interested in developing catalysts from abundant and inexpensive elements. This time we turned our eyes toward sustainability and investigated the utility of common metals as catalysts for producing hydrogen gas," explains Associate Professor Takahiro Matsumoto of Kyushu University's Faculty of Engineering who led the study. "Hydrogen is a clean energy carrier because it does not produce carbon dioxide when used. However, most hydrogen today is made from fossil fuels, so we must develop sustainable methods to produce it to have a positive ecological impact."
The team began by experimenting with generating hydrogen gas from methanol using organometallic iron complexes. Alcohols, such as methanol, are compounds that contain hydrogen which can be removed through a process called alcohol dehydrogenation. However, the process usually requires complex catalysts made from rare or expensive metals.
While conducting their experiments, the team encountered some unusual results.
"In what can only be considered incredible serendipity, we found in one of our control experiments mixing methanol, iron ions, and sodium hydroxide, and then irradiating it with UV light, generated a considerable amount of hydrogen gas," continues Matsumoto. "It was hard to believe at first. We validated these findings, experimented further, and confirmed them. We found that the hydrogen production rate was 921 mmol of hydrogen per hour per gram of catalyst. This number is comparable to the best catalysts reported to date."
The researchers also found that their new system could produce hydrogen from other alcohol species as well as from materials such as glucose, starch, and cellulose.
The team intends to develop their new findings in hopes that further optimization will lead to more sustainable hydrogen technologies.
"One limitation of this study is that we still do not know the reaction mechanism in detail. Additionally, although we observed hydrogen generation from other materials, the catalytic activity for these substrates is still low," concludes Matsumoto. "Finally, this reaction is so simple that anyone, from elementary school students to curious adults, can reproduce it. I encourage everyone to try it out, and I hope it inspires people to pursue careers in the sciences."
Journal Reference: "Iron ion enables photocatalytic hydrogen evolution from methanol," Masaya Sakurai, Yudai Kawasaki, Yuki Itabashi, et al., Communications Chemistry, https://doi.org/10.1038/s42004-026-02009-3
3D printing brings design to life after four decades:
Unlike conventional zippers that connect two flat surfaces in 2D, the Y-Zipper joins three flexible arms into a rigid 3D triangular tube. When open or unzipped, the structure behaves like soft plastic strips or floppy tentacles, with each arm flexing and twisting independently. Once zipped shut with a custom slider, however, the arms interlock to form a stiff, beam-like structure capable of supporting loads.
That ability to switch between soft and rigid states is particularly relevant for robotics and deployable systems. Engineers often struggle to combine flexibility and structural stiffness within the same mechanism. Soft robotic systems adapt well to unpredictable environments but often lack strength, while rigid systems provide stability at the cost of flexibility. MIT’s design attempts to combine both.
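As a loose software analogy only (nothing in the paper is written this way), the mechanism can be pictured as a two-state component where zipping trades compliance for load capacity:

```python
# Toy two-state model of a stiffness-switching arm. The numeric values
# are invented placeholders, not measurements from the MIT work.
from dataclasses import dataclass

@dataclass
class YZipperArm:
    zipped: bool = False

    @property
    def state(self) -> str:
        return "rigid beam" if self.zipped else "floppy strip"

    @property
    def load_capacity(self) -> float:
        # Placeholder: high when interlocked, near zero when open.
        return 50.0 if self.zipped else 0.5  # arbitrary units

arm = YZipperArm()
print(arm.state, arm.load_capacity)   # floppy strip 0.5
arm.zipped = True
print(arm.state, arm.load_capacity)   # rigid beam 50.0
```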
The researchers demonstrated a robotic quadruped with legs capable of changing height and stiffness by actuating the zipper mechanism with motors. Such systems could help robots navigate uneven terrain by dynamically adjusting limb geometry in response to the environment.
The team also tested the system in deployable structures. In one demonstration, they used the Y-Zipper to rapidly assemble a tent-like structure, with the three-sided mechanism serving as both the structural support frame and the joining system. According to the team, setup time dropped from roughly six minutes to one minute and 20 seconds because the zipper effectively snaps the structure into place.
Medical applications are another possible target. The researchers created a wrist-cast prototype that wrapped the mechanism around a wrist cast, allowing users to loosen it during the day for comfort before tightening it again at night for support.
Beyond engineering applications, the system can also produce dynamic moving structures for art and design. One prototype resembled a mechanical flower that “bloomed” as a motor zipped the structure upward.
Durability testing showed the mechanism surviving roughly 18,000 zip-and-unzip cycles before failure. According to the researchers, the structure’s elastic behavior helps distribute stress across the assembly instead of concentrating it in a single area.
The team evaluated versions of the structure made from popular 3D-printing materials, polylactic acid (PLA) and thermoplastic polyurethane (TPU). PLA handled heavier loads more effectively, while TPU provided greater flexibility. Future versions could use stronger materials such as metal and scale to much larger sizes. Researchers also suggested possible aerospace applications, including deployable spacecraft structures and robotic systems capable of grabbing rock samples during exploration missions.
The work was presented at the ACM Conference on Human Factors in Computing Systems (CHI) in April and detailed in a paper titled "Y-Zipper: 3D Printing Flexible–Rigid Transition Mechanism for Rapid and Reversible Assembly."
[Ed. note: Interesting video of it in action and the lead author provides the STL files in case you want to print your own]
New findings from an analysis of more than 20,000 patients across three major NIH studies show that elevated Lipoprotein(a) [Lp(a)] is linked to ongoing cardiovascular risk, even after standard treatments.
Lp(a) is a cholesterol-carrying particle found in the blood. It resembles LDL, often called “bad” cholesterol, but includes an added protein that may increase its harmful effects on the heart.
High Lp(a) levels are mainly inherited and can raise the risk of cardiovascular disease even when routine cholesterol levels appear normal. About one in five people has elevated Lp(a), though most do not know it because it rarely causes symptoms. While its connection to heart disease is well known, its ability to predict risk in people with and without existing conditions remains unclear.
The results were presented as late-breaking research at the Society for Cardiovascular Angiography & Interventions (SCAI) 2026 Scientific Sessions and the Canadian Association of Interventional Cardiology/Association Canadienne de cardiologie d’intervention (CAIC-ACCI) Summit in Montreal.
The study examined stored plasma samples from 20,070 participants aged 40 and older who were enrolled in the ACCORD, PEACE, and SPRINT NIH randomized trials. Researchers analyzed all samples in a specialized laboratory using a standardized test and reported results in nmol/L.
Participants were categorized by Lp(a) levels (<75, 75 to 125, 125 to 175, or ≥175 nmol/L) and by whether they had preexisting heart disease. Statistical models accounted for factors such as age, health conditions, lipid levels, and treatments.
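As a purely illustrative helper (hypothetical code, not the study's analysis scripts), the binning described above maps onto a simple threshold function:

```python
# Hypothetical sketch of the Lp(a) categorization described above.
# Thresholds (nmol/L) come from the study summary.
def lpa_category(lpa_nmol_l: float) -> str:
    if lpa_nmol_l < 75:
        return "<75 nmol/L"
    elif lpa_nmol_l < 125:
        return "75-125 nmol/L"
    elif lpa_nmol_l < 175:
        return "125-175 nmol/L"
    return ">=175 nmol/L (highest-risk group)"

for level in (40, 90, 150, 200):
    print(level, "->", lpa_category(level))
```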
Participants had an average age of about 65 years, and roughly 65% were men. The main outcome measured was major adverse cardiovascular events (MACE), which included heart attack, stroke, coronary revascularization, or death from heart-related causes.
Over a median follow-up of nearly 4 years, 1,461 events (7.3%) occurred. People with Lp(a) levels of 175 nmol/L or higher had about a 31% higher risk of major cardiovascular events, a 49% higher risk of cardiovascular death, and a 64% higher risk of stroke. This level was not linked to a higher risk of heart attack.
The increased risk was more noticeable in people who already had heart disease, with about a 30% higher risk, compared to an 18% higher risk in those without existing heart disease.
“For the first time, we can quantify the specific level of Lp(a) that puts patients at a significantly higher risk of major cardiovascular events, especially stroke and death,” said Subhash Banerjee, MD, FSCAI, interventional cardiologist at Baylor Scott & White in Dallas, Texas. “Regardless of age, patients can take a simple, low-cost blood test to determine whether they have this genetic condition. If elevated Lp(a) levels are detected, they should work closely with their healthcare provider to aggressively lower LDL cholesterol and manage other cardiovascular risk factors as much as possible. This knowledge is especially valuable as new targeted treatment options are on the horizon.”
The researchers added that analyzing stored biological samples can reveal new insights from completed trials. Future work will focus on specific patient groups, including those with chronic kidney disease and peripheral artery disease.
Reference: “Lipoprotein(a) Identifies Residual Cardiovascular Risk in NIH Randomized Trials” by Subhash Banerjee, 24 April 2026, Society for Cardiovascular Angiography & Interventions (SCAI) 2026 Scientific Sessions.
Kdenlive 26.04.1 fixes a serious project file vulnerability and ships stability improvements across editing, audio, subtitles, transitions, and project recovery.
https://linuxiac.com/kdenlive-26-04-1-video-editor-fixes-serious-project-file-security-flaw/
Kdenlive 26.04.1 is the first maintenance update in the 26.04 series, introducing a key security fix and several stability and workflow enhancements for the open-source video editor.
The primary update resolves a serious vulnerability related to malicious .kdenlive project files. The issue, identified during a security audit, could allow remote code execution when opening a compromised project file.
Due to the severity of this issue, users are strongly encouraged to upgrade to version 26.04.1. If immediate updating is not possible, avoid opening project files you did not create.
This release also previews a security check planned for Kdenlive 26.08, which will alert users if unexpected input is detected in a project file. While the vulnerability itself is resolved in 26.04.1, the developers say this upcoming check provides an additional layer of protection during project loading.
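Kdenlive project files are XML documents, so until that 26.08 check lands, a cautious user could run a rough pre-scan over an untrusted file before opening it. The script below is a hypothetical triage heuristic of our own, not Kdenlive's validation logic, and no substitute for upgrading:

```python
# Hypothetical triage for an untrusted .kdenlive project file.
# Flags suspicious-looking attribute values and text for manual review.
# This is an illustration only; upgrade to 26.04.1 regardless.
import sys
import xml.etree.ElementTree as ET

SUSPICIOUS = ("http://", "https://", "file://", "$(", "`", ";")

def scan(path: str) -> None:
    tree = ET.parse(path)  # raises ParseError on malformed XML
    for elem in tree.iter():
        values = list(elem.attrib.values()) + [elem.text or ""]
        for value in values:
            if any(marker in value for marker in SUSPICIOUS):
                print(f"review: <{elem.tag}> contains {value[:60]!r}")

if __name__ == "__main__":
    scan(sys.argv[1])
```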
In addition to the security fix, Kdenlive 26.04.1 delivers several reliability enhancements. The update resolves issues where the editor could access uninitialized media recorders or audio devices before permissions were granted. On macOS, the release improves permissions handling, updates the minimum supported version in the Info.plist file, and explicitly requests microphone access.
The update resolves a clip monitor issue where the playhead could become stuck when switching between clips. Sequence handling is also improved, addressing repeated resize confirmation messages and issues with dropping sequences without audio into the timeline.
Subtitle handling is improved, too, with a fix for crashes when cutting subtitles on higher layers and a limit on the number of supported subtitle layers. Plus, the release corrects transition preview generation, prevents incorrect previews, and switches transition previews to GIF format since most Kdenlive binaries do not support WebP encoding.
The update also includes multiple crash fixes, addressing issues such as switching between icon and list views in the transitions list, opening documents with an uninitialized core profile, adding the first clip in certain audio-level scenarios, and closing the application via the welcome screen close button.
Additional changes address archiving title files with images, opening recent projects with the correct profile from the welcome screen, tab order in the color clip dialog, and bin icon mode behavior when working with folders, zones, sequences, and subclips.
For more details, see the release announcement. Kdenlive 26.04.1 is available from the project's download page and through distribution package managers.
The game asks players to find the least worst options for a shipping chokepoint:
It's no fun living through the global energy shock and growing economic crisis that have ensued since the conflict choked off shipping through the Strait of Hormuz. But it can be enlightening to play through the new game Bottleneck, which forces players to choose among the 2,000 ships still stuck in and around the strait—all while actual news reports and real maritime transit data help tell the story of the unfolding events.
The free browser-based game challenges players to act as a fictional maritime coordinator by selecting a handful of ships that get to pass through the strait each day. Most decisions come with serious costs or trade-offs, whether it's paying the toll imposed by the Iranian government, which has claimed authority over the strait, or antagonizing Iran or the United States and pushing either side toward widening the war. Failure to push through enough specific shipments can spark individual crises involving the price of oil, food, and water security, and a countdown to famine in many countries.
"The game does not ask whether you are smart enough to solve the crisis," said Jakub Gornicki, the journalist and artist who developed the game, in a post. "It asks what kind of damage you choose when every option has a cost."
Players must also manage relations with factions beyond Tehran and Washington, such as the Gulf States, the United Nations World Food Programme, and the shipping industry. Prioritizing shipments of crude oil and liquefied natural gas may satisfy the US's interest in keeping energy prices in check, but it will erode the trust of the United Nations, which would rather see more ships carrying fertilizer to stave off future famine.
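To make that trade-off structure concrete, here is a toy model of how such faction mechanics could be wired up. The faction names follow the article; the effects and numbers are invented for illustration and are not Gornicki's actual game code:

```python
# Toy model of Bottleneck-style trade-offs: approving one ship shifts
# several faction meters at once. Effects and values are invented.
faction_trust = {"Iran": 50, "US": 50, "Gulf States": 50, "UN WFP": 50}

CARGO_EFFECTS = {
    "crude oil":   {"US": +5, "UN WFP": -3},   # keeps energy prices in check
    "LNG":         {"US": +4, "UN WFP": -2},
    "fertilizer":  {"UN WFP": +6, "US": -1},   # staves off future famine
    "desal parts": {"Gulf States": +6},        # water security
}

def approve(cargo: str, pay_iran_toll: bool) -> None:
    for faction, delta in CARGO_EFFECTS[cargo].items():
        faction_trust[faction] += delta
    # Paying the toll placates Tehran; refusing antagonizes it.
    faction_trust["Iran"] += 4 if pay_iran_toll else -6

approve("fertilizer", pay_iran_toll=True)
print(faction_trust)
# {'Iran': 54, 'US': 49, 'Gulf States': 50, 'UN WFP': 56}
```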
That may sound like a lot to wrap your head around for a game that is playable in 15 to 20 minutes, but it's a surprisingly accessible experience for the most part. The game serves up plenty of explanations and news articles that you can click on to better understand the real-world context and in-game consequences.
However, each ship approved for transit tends to carry a greater cost or trade-off as the game progresses over 10 playable days between March 3 and April 13, 2026. You have the choice of not sending any ships through the strait on any given day, but that can quickly lead to dismal endgame results such as "empty shelves" and "desalination collapse" for Gulf States facing food insecurity and a lack of fresh water from energy-starved desalination plants.
If you manage to muddle through and keep all the factions from spiraling, the endgame results still provide plenty of charts and numbers to remind you that the real-life Strait of Hormuz crisis is far from over. Even squeezing through several dozen ships over 10 days—the best-case shipping scenario in the game—remains a far cry from the pre-war average of 130 ships passing through the strait each day. The inadequacy of that shipping rate continues to have daily real-world consequences.
Gornicki designed and built the game by himself over 17 days, writing the game's underlying code with the help of an AI coding tool, which he described in a press kit as being "audited and corrected at every step." He also incorporated more than 125 verified and linked news articles, along with shipping data from sources such as Windward Maritime Intelligence and Lloyd's List.
"The chokepoint is not a story you read once and put down—it returns every week, in fuel prices, in fertilizer shortages, in food security in places far from any tanker," Gornicki said. "I wanted to give people a form of this reporting they could not skim past."
Link between pollinators and diverse landscapes is a two-way street:
Ecologists have long seen a strong connection between biodiversity and pollinators – the butterflies, bees and other insects, along with birds and bats, that help the flowers they snack on by transferring pollen from male anthers to female stigmas.
Previous research has shown diverse landscapes draw more pollinators, as a wider variety of pollen and nectar attracts attention from a wider variety of animals – some of which feed only on certain plants. Essentially, pollinators go where the food is, said Brian Wilsey, a professor of ecology, evolution and organismal biology at Iowa State University.
A recent study by Wilsey and doctoral graduate Nathan Soley showed the converse is also true: Pollinators support diversity in plant communities. In an article published this month in Ecology, Wilsey and Soley described a four-year experiment they conducted in plots of restored prairie that examined how plant diversity was affected by purposely protecting wildflowers from pollinators. Among animal-pollinated plants, viable seed production fell by 50% and the diversity of species fell by 27%, they found.
"Our study is the first we are aware of to show that plant biodiversity at the community level can be limited by a lack of pollinators," Wilsey said.
[...] The study's results suggest significant declines in pollinators could cause biodiversity losses that further reduce pollinator populations, causing a self-reinforcing downward trend in both that the researchers call a "plant-pollinator extinction vortex."
"Before this study, I would have never thought that pollinators were this important to maintaining biodiversity. It really opened my eyes," Wilsey said.
Pollinators are essential because of their role in food production. According to the U.S. Department of Agriculture, about 35% of global food crops depend on animal pollination to reproduce, making the seeds and fruits that humans harvest.
In addition to providing critical support for pollinators and other wildlife, diverse landscapes improve water and soil quality. In prairies, which used to cover most of Iowa, a variety of life makes ecosystems more resilient to droughts, floods and invasive species. Beyond pollinators, the known pro-biodiversity factors include low nutrient availability, proximity to other quality habitat and a lack of human degradation, Wilsey said.
One major implication of knowing pollinators help maintain plant biodiversity is the need to consider the presence of pollinator habitat when establishing prairie restoration areas. That's especially true for urban projects, Wilsey said. The human-enhanced pollination plots in the study showed no change in biodiversity when compared to the control plots, an indication that there were sufficient bees and other pollinators in the area. But that's less likely to be the case in more human-impacted environments.
Journal Reference: Nathan M. Soley, Brian J. Wilsey, "Pollinators maintain biodiversity in assembling plant communities," Ecology. DOI: https://doi.org/10.1002/ecy.70369
The cosmological constant is a term physicists use to describe the energy pushing the universe to expand faster over time. Despite its simple definition, it represents one of the deepest unsolved problems in physics.
Measurements show that this energy exists, but its strength is astonishingly small. That is where the trouble begins. Quantum field theory (QFT), the framework that successfully explains particles and forces, predicts that empty space should contain an enormous amount of energy.
In fact, the theoretical value is so large it would cause the universe to rip itself apart almost instantly. Instead, the real universe expands at a much calmer pace, allowing galaxies, stars, and planets to form. This gap between theory and observation is often described as one of the worst predictions in physics.
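The size of the gap is worth spelling out. Taking the naive QFT estimate to be set by the Planck scale and comparing it with the observed dark-energy scale of roughly a milli-electron-volt gives the often-quoted mismatch of about 120 orders of magnitude (the exact exponent depends on the cutoff chosen):

```latex
\frac{\rho_{\text{vac}}^{\text{QFT}}}{\rho_{\Lambda}^{\text{obs}}}
\;\sim\; \frac{M_{\text{Pl}}^{4}}{\left(10^{-3}\,\text{eV}\right)^{4}}
\;\sim\; 10^{120}
```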
Researchers at Brown University have proposed a new explanation for this mismatch.
The team found that the mathematics behind a simple model of quantum gravity closely mirrors the equations used to describe the quantum Hall effect, an unusual state of matter where electrical flow behaves with remarkable precision.
In the quantum Hall effect, electrical conductance remains fixed even when the material contains defects. This stability comes from topology, which refers to the mathematical structure or “shape” of a quantum state. The researchers identified a similar topological feature in the Chern-Simons-Kodama state, a proposed ground state for quantum gravity.
“What we’ve shown is that if space-time has this non-trivial topology, then it resolves one of the deadliest problems of the cosmological constant,” said study co-author Stephon Alexander, a professor of physics at Brown. “All the quantum perturbations that should blow up the value of the cosmological constant are rendered inert by this topology, which keeps the constant’s value stable.”
[...] Alexander has spent years studying Chern-Simons-Kodama (CSK) theory, a proposed state of quantum gravity that grows out of quantum field theory. Scientists have yet to settle on a quantum theory of gravity — a theory that explains how gravity works at the tiniest scales — but the CSK state is one of the more straightforward candidates, according to Alexander.
“It’s a really conservative approach to quantizing gravity,” he said. “This is the approach used by people like Dirac, Schrödinger, and Wheeler. It’s just good, old-fashioned quantization.”
Alexander had been aware of some mathematical similarities between CSK and the math behind the quantum Hall effect, but he wasn’t entirely sure what to make of them. That’s when he turned to Hui, an assistant professor at Brown who specializes in topological systems like those that emerge in the quantum Hall effect.
[...] Together, the researchers were able to show that the cosmological constant has a similar “topological protection” in the CSK state as electrical conductivity has in the quantum Hall effect. The quantum Hall effect emerges when electricity flows through very thin materials in the presence of a magnetic field. Imagine a flat, two-dimensional piece of metal cut into a rectangular strip with an electric current running longways down the strip. Introducing a magnetic field produces a second voltage that runs perpendicular to the original current. This is known as a Hall voltage (named after Edwin Hall, who discovered it).
[...] “What we find is that this quantization of the electrical conductance in quantum Hall has an analog with the cosmological constant,” Hui said. “It also ends up becoming quantized for topological reasons. There turn out to be constraints in the theory that force the cosmological constant to take certain allowed quantized values.”
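The quantum Hall side of that analogy is textbook physics: the Hall conductance is locked to multiples of a fundamental constant, with the filling factor ν an integer (or certain fractions in the fractional effect):

```latex
\sigma_{xy} = \nu \,\frac{e^{2}}{h}, \qquad \nu \in \mathbb{Z}
```

The paper's suggestion, as quoted above, is that topological constraints play the same role for the cosmological constant in the CSK state, restricting it to a discrete set of allowed values.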
There’s much more work to be done to fully flesh out a topological solution to the cosmological constant problem, Alexander says. But finding a potential solution to the gravitational aspect of the problem is a crucial start. At the very least, he says, the work bolsters the profile of the CSK state as a candidate for a long-sought theory of quantum gravity.
“We took something old, which is this conservative, canonical approach to quantum gravity, and discovered something new that had been there all along,” Alexander said. “Now we’re working on a bigger picture of how this phenomenon works.”
Reference: “Cosmological Constant from Quantum Gravitational 𝜃 Vacua and the Gravitational Hall Effect” by Stephon Alexander, Heliudson Bernardo and Aaron Hui, 17 April 2026, Physical Review Letters. DOI: https://doi.org/10.1103/rzz5-p4f4
NASA is serious about taking more shots on goal, but some of them need to start landing:
NASA's goal of reaching the Moon's surface as many as 21 times over the next two and a half years will require an overhaul of the agency's approach to buying lunar landers and success in rectifying the myriad problems that have, so far, caused three of the last four US landing attempts to falter.
It will also require improved oversight of NASA's industrial base and better management of a supply chain that has often failed to deliver on time.
These landers are separate from NASA's Human Landing System program, which has contracts with SpaceX and Blue Origin to develop and deliver human-rated landers to ferry crews to and from the lunar surface for the agency's Artemis program. Alongside the crew landers, dozens of robotic and cargo landings will deliver payloads to scout for a future Moon base and demonstrate technologies for larger vehicles, mining and resource utilization, and sustained operations during the two-week-long lunar night.
The fundamentals for high-frequency missions to the lunar surface are in place. NASA's Commercial Lunar Payload Services (CLPS) program, announced eight years ago this week, has assembled a roster of commercial providers to design and build robotic Moon landers. Through CLPS, NASA has contracted with US companies for 13 missions since 2019. Four of them have launched, and just one has completed a fully successful landing. Four more commercial landers are under construction now for launches in the second half of this year, but as is common in the space industry, their schedules have a history of delays, and some are likely to move into 2027.
Eight years in, CLPS is still in its "infancy," said Brad Bailey, NASA's assistant deputy associate administrator for exploration, during a recent lunar science workshop. Now, NASA is asking its lander providers, still learning to crawl, to rapidly learn to walk and run over the next two years.
NASA has penciled in nine lunar landings for next year, followed by 10 in 2028. NASA and its commercial partners must pick up the pace to come anywhere close to that. Isaacman acknowledged this in a hearing last week before the Senate Appropriations Committee's Subcommittee on Commerce, Justice, and Science.
"We have to do more than talk," Isaacman said. "For a very long time across all of NASA, we've talked a really good game but then we kind of sit and wait for our vendors and partners to deliver outcomes, and as a result we tend to be late and it tends to cost more, so how do you change that?"
One way, Isaacman said, is for NASA to offer more aid to the companies it is paying to develop Moon landers. "You start to embed subject matter experts across the supply chain to drive outcomes," he said.
[...] Aware of the hazards, NASA leaders at the start of the CLPS program likened the approach to a soccer or hockey team taking "shots on goal." Their thinking was that numerous landing attempts would allow companies to wring out their technology and improve their chances of sticking the next landing. The program's progress has been slow.
Today, NASA is in a race. The agency is charged with landing astronauts on the Moon before China, perhaps as soon as 2028, and following that achievement with the build-out of a permanent base near the south pole. Future CLPS missions will carry more sophisticated payloads, such as expensive rovers, hopping drones, communications relay satellites, and other pioneering tech demos that will underpin the Moon base design. If they are to succeed, NASA and its commercial partners will have to turn the page from taking shots on goal to hitting the net almost every time.
Facing this new urgency, NASA officials are eager to transition from demonstrating reliable lunar landers to delivering tangible infrastructure to the Moon's surface. Today's reality is that none of the lander contractors are there yet. There's still a lot to learn about landing and operating on the Moon.
This means NASA will need to take risks. The agency is still in an "exploratory phase," Merancy continued. "How do we get these systems out there, test them, and learn from them? That means dissimilar systems because I don't know which one's going to work well."
Paradoxically, NASA must take more shots on goal in order to stop taking them. That means buying more CLPS missions—and doing so quickly. An update posted by NASA on a federal government procurement website last week signaled the agency's intent to raise the ceiling of the CLPS contract from $2.6 billion to $4.2 billion. There are 13 companies eligible to compete for CLPS missions, but three—Astrobotic, Firefly Aerospace, and Intuitive Machines—have won the lion's share of CLPS contracts to date.
[...] It's now up to NASA's other CLPS providers to show they can reach the Moon, and all of them—including Firefly—must prove they can do so repeatedly. NASA and its contractors must cut Firefly's four-year lead time in half to ramp up to a monthly cadence in the next two years.
NASA will take a more paternalistic approach with the next round of CLPS orders. "When you are building, we need to hear the things that are slowing you down, and we're going to try to help you with those things," Carlos Garcia-Galan, head of NASA's Moon base program, told representatives of the CLPS companies at last week's LSIC meeting.
Good Job Dell and Lenovo! Hope Others Follow You
Only last week, we were talking about how LVFS, the firmware update service for Linux, had turned up the heat on vendors who didn't contribute their fair share.
To tackle that, the project has been going through a phased restrictions rollout that includes things like introducing fair-use download utilization graphs and removing detailed per-firmware analytics.
But that obviously wouldn't solve their lack of funding.
Luckily, two vendors have stepped up. Lenovo and Dell have both signed on as Premier sponsors for LVFS, each putting in $100,000 a year to help fund the project going forward.
They are also the first to reach this tier. Before now, only Framework Computer and the Open Source Firmware Foundation were on as Startup sponsors, contributing $10,000 a year.
Premier is the highest level of financial commitment any vendor can make to the project.
The sponsorships were announced yesterday, and the LVFS homepage already reflects the change. Between the two of them, that's $200,000 a year going into a project that had been running almost entirely on the goodwill of the Linux Foundation and Red Hat.
[...] The vendors still treating LVFS like a free service they have no obligation to support should probably pay attention to what comes next. API access gets cut for non-Startup vendors in August. Automated upload limits follow in December.